14 research outputs found

    Beyond Material Implication: An Empirical Study of Residuum in Knowledge Enhanced Neural Networks

    Get PDF
    Knowledge Enhanced Neural Networks (KENN) is a neuro-symbolic architecture that exploits fuzzy logic to inject prior knowledge, codified as propositional formulas, into a neural network. It works by adding a new layer at the end of a generic neural network that refines the initial predictions according to the knowledge. In the existing KENN, following the material implication rule, a conditional statement is represented as a conjunctive normal form formula. This work extends that interpretation of the implication by using fuzzy logic's Residuum semantics and shows how it has been integrated into the original KENN architecture while keeping it reproducible. The Residuum integration made it possible to evaluate KENN on MNIST Addition, a task that the original architecture could not approach, and the results obtained were comparable to other state-of-the-art neuro-symbolic methods. The extended architecture has subsequently also been evaluated on visual relationship detection, showing that it can improve the performance of the original one.
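The distinction between the two readings of "A implies B" on fuzzy truth values can be sketched in a few lines. This is a minimal illustration of standard fuzzy-logic semantics, not KENN's actual implementation; function names are invented for the example.

```python
# Two fuzzy interpretations of "A -> B" on truth values in [0, 1].
# Minimal sketch of textbook fuzzy-logic semantics; not KENN code.

def material_implication(a, b):
    # A -> B rewritten as "(not A) or B" using the Goedel t-conorm (max),
    # i.e. the clause form used when the implication becomes a disjunction.
    return max(1.0 - a, b)

def goedel_residuum(a, b):
    # Residuum of the Goedel t-norm (minimum): 1 if a <= b, else b.
    return 1.0 if a <= b else b

def lukasiewicz_residuum(a, b):
    # Residuum of the Lukasiewicz t-norm: min(1, 1 - a + b).
    return min(1.0, 1.0 - a + b)
```

For example, with a = 0.5 and b = 0.7 the material implication evaluates to 0.7, while the Gödel residuum is fully satisfied (1.0) because the consequent already exceeds the antecedent, which is precisely the behavioral difference the residuum interpretation exploits.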

    Validation and uncertainty analysis of ASTEC in early degradation phase against QUENCH-06 experiment

    Get PDF
    Two pyrene-tetrazole conjugates were synthesized as photoreactive chromophores that allow for the first time the combination of metabolic labelling of DNA in cells and subsequent bioorthogonal “photoclick” modification triggered by visible light. Two strained alkenes and three alkene-modified nucleosides were used as reactive counterparts and revealed no major differences in their “photoclick” reactivity. This is a significant advantage because it allows 5-vinyl-2′-deoxyuridine to be applied as the smallest possible alkene-modified nucleoside for metabolic labelling of DNA in cells. Both pyrene-tetrazole conjugates show fluorogenicity during the “photoclick” reactions, which is a second advantage for cellular imaging. Living HeLa cells were incubated with 5-vinyl-2′-deoxyuridine for 48 h to ensure one cell division. After fixation, the newly synthesized genomic DNA was successfully labelled by irradiation with visible light at 405 nm and 450 nm. This method is an attractive tool for the visualization of genomic DNA in cells with full spatiotemporal control by the use of visible light as a reaction trigger.

    Evaluation of a Double-Ended Guillotine Break Transient in a Three-Loop PWR-900 like with TRACE Code Coupled with DAKOTA Uncertainty Analysis

    No full text
    In the present study, the model of a reference generic three-loop PWR-900-like western-type reactor has been developed and a double-ended guillotine break on the cold leg has been simulated with the TRACE code. Through the SNAP graphical interface, a DAKOTA uncertainty analysis, based on the probabilistic method of propagating input uncertainty, has been performed by selecting uncertain parameters related to the safety injection system and to the initial plant status. In particular, six uncertain input parameters have been considered: the accumulators’ initial pressure and temperature, the safety injection system temperature and flow rate, the reactor initial power and the containment initial pressure. The main figure of merit selected for the application of regression correlation is the hot rod cladding temperature. Both Pearson and Spearman correlation coefficients have been computed for the cladding temperature of the hot rod to characterize its correlation with the input parameters in the different phases of the transient. In addition, the dispersion of the calculated data has been evaluated for selected relevant thermal-hydraulic parameters, such as the primary pressure, the core mass flow rate and the water level in the vessel.
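The two correlation measures used here differ in what they capture: Pearson measures linear association, while Spearman measures monotone association by correlating ranks. A pure-Python sketch (the sample data below is invented for illustration, not taken from the TRACE runs):

```python
# Pearson and Spearman correlation, stdlib only; illustrative sketch.

def pearson(x, y):
    # Linear correlation: covariance divided by the product of std devs.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    # 1-based ranks, averaging over ties.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Rank correlation: Pearson correlation applied to the ranks.
    return pearson(ranks(x), ranks(y))
```

On a monotone but nonlinear relationship (e.g. y = x²), Spearman returns 1 while Pearson returns less than 1, which is why both are reported when the response of the cladding temperature to an input may be nonlinear.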

    Evaluation of a Double-Ended Guillotine LBLOCA Transient in a Generic Three-Loops PWR-900 with TRACE Code Coupled with DAKOTA Uncertainty Analysis

    No full text
    In the present study, the model of a generic three-loop PWR-900 western-type reactor has been developed and a double-ended guillotine break on the cold leg has been simulated with the TRACE code. Through the SNAP graphical interface, a DAKOTA uncertainty analysis, based on the probabilistic method of propagating input uncertainty, has been performed by selecting uncertain parameters related to the safety injection system and to the initial plant status. In particular, six uncertain input parameters have been considered: the accumulators’ initial pressure and temperature, the safety injection system temperature and flow rate, the reactor initial power and the containment initial pressure. The main figure of merit selected for the application of regression correlation is the hot rod cladding temperature. Both Pearson and Spearman correlation coefficients have been computed for the cladding temperature of the hot rod to characterize its correlation with the uncertain input parameters in the different phases of the transient. In addition, the dispersion of the calculated data has been discussed for selected relevant thermal-hydraulic parameters, such as the primary pressure, the core mass flow rate and the collapsed water level in the vessel.
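The probabilistic propagation step itself follows a simple pattern: sample each uncertain input from its assumed distribution, run the code once per sample, and study the spread of the output. A toy sketch of that loop (the parameter ranges and the stand-in "model" below are invented, not the TRACE/DAKOTA setup):

```python
import random

# Toy Monte Carlo propagation of input uncertainty. The uniform ranges and
# the algebraic "model" are invented stand-ins, NOT cladding physics.

random.seed(1)

RANGES = {
    "accumulator_pressure": (4.0, 4.4),  # MPa, assumed range
    "injection_flow": (0.95, 1.05),      # relative, assumed range
    "initial_power": (0.98, 1.02),       # relative, assumed range
}

def toy_model(p):
    # Stand-in for one code run returning a scalar figure of merit.
    return 1000.0 * p["initial_power"] / (
        p["injection_flow"] * p["accumulator_pressure"] / 4.2)

def propagate(n_runs=200):
    # Sample inputs, evaluate the model, and report the output dispersion.
    outs = []
    for _ in range(n_runs):
        sample = {k: random.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
        outs.append(toy_model(sample))
    return min(outs), max(outs)
```

In the real study each "model run" is a full TRACE transient, which is why DAKOTA manages the sampling and why the number of runs is kept small relative to the dimensionality of the input space.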

    A Bayesian framework of inverse uncertainty quantification with principal component analysis and Kriging for the reliability analysis of passive safety systems

    No full text
    In this work, we propose an Inverse Uncertainty Quantification (IUQ) approach to assigning Probability Density Functions (PDFs) to uncertain input parameters of Thermal-Hydraulic (T-H) models used to assess the reliability of passive safety systems. The approach uses experimental data within a Bayesian framework. The application to a RELAP5-3D model of the PERSEO (In-Pool Energy Removal System for Emergency Operation) facility located at the SIET laboratory (Piacenza, Italy) is demonstrated. Principal Component Analysis (PCA) is applied for output dimensionality reduction and Kriging meta-modeling is used to emulate the reduced set of RELAP5-3D code outputs. This is done to decrease the computational cost of the Markov Chain Monte Carlo (MCMC) posterior sampling of the uncertain input parameters, which requires a large number of model simulations.
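The MCMC posterior-sampling step can be illustrated with a minimal random-walk Metropolis sampler. The one-dimensional toy posterior below is an invented stand-in, not the RELAP5-3D/PERSEO likelihood:

```python
import math
import random

# Random-walk Metropolis sampler for a 1-D posterior; toy example only.

random.seed(0)

def log_posterior(theta, data):
    # Flat prior on [0, 10]; Gaussian likelihood with unit variance.
    if not 0.0 <= theta <= 10.0:
        return float("-inf")
    return -0.5 * sum((d - theta) ** 2 for d in data)

def metropolis(data, n_steps=5000, step=0.5, theta0=5.0):
    theta, lp = theta0, log_posterior(theta0, data)
    samples = []
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

data = [2.1, 1.9, 2.3, 2.0, 1.7]  # synthetic "measurements"
samples = metropolis(data)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # drop burn-in
```

Each iteration requires one evaluation of the likelihood, i.e. one forward-model run; this is exactly the cost that the Kriging meta-model is introduced to avoid by emulating the code output instead of re-running it.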

    Passive safety systems analysis: A novel approach for inverse uncertainty quantification based on Stacked Sparse Autoencoders and Kriging metamodeling

    No full text
    In passive safety system analysis, it is important to provide the uncertainty quantification of the Thermal-Hydraulic (T-H) code output (e.g., the amount of energy exchanged by the passive safety system during an accidental transient). This requires setting proper Probability Density Functions (PDFs) to represent the uncertainty of selected code inputs and propagating this uncertainty through the code. One way to obtain the PDFs is by Inverse Uncertainty Quantification (IUQ) methods, which rely directly on experimental data and code simulation results. In this work, we present an innovative IUQ method based on: (i) Stacked Sparse Autoencoders (SSAEs) to reduce the problem dimensionality; and (ii) Kriging metamodels to lower the computational burden associated with the sampling of the uncertain input parameters’ posterior PDF by Markov Chain Monte Carlo (MCMC), for which many model simulations are typically required. The novelty lies in the use of SSAEs for dimensionality reduction: this allows directly using the raw data available from experimental facilities or computer codes (typically characterized by small signal-to-noise ratios) without having to resort to filtering techniques, whose choice and setting are nontrivial and bias the results. The proposed approach is applied to the power exchanged by the Heat Exchanger (HX) as predicted by the RELAP5-3D model of the PERSEO facility, characterized by a small signal-to-noise ratio (SNR) value. Principal Component Analysis (PCA) and SSAE are compared to explain the application of these methodologies in the context of IUQ and to highlight their main advantages and drawbacks, while also showing their suitability for dealing with non-filtered (raw) data.
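The Kriging metamodel that replaces the expensive code runs is, at its core, a Gaussian-process interpolator. A minimal one-dimensional version with a squared-exponential kernel, written with plain lists (the training points below are invented, not PERSEO data):

```python
import math

# Minimal 1-D Kriging (Gaussian-process) interpolator; illustrative sketch.

def rbf(a, b, length=1.0):
    # Squared-exponential (RBF) covariance kernel.
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting on an augmented copy of A.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kriging_predict(xs, ys, xq, length=1.0, nugget=1e-9):
    # Posterior mean of a zero-mean GP at query point xq, given training
    # inputs xs and outputs ys; the nugget stabilizes the linear solve.
    K = [[rbf(xi, xj, length) + (nugget if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(a * rbf(xi, xq, length) for a, xi in zip(alpha, xs))
```

Because the predictor interpolates the training runs (almost) exactly and evaluates in microseconds, the MCMC chain can query it thousands of times at negligible cost compared to running the T-H code itself.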